
    Low-level neural auditory discrimination dysfunctions in specific language impairment—A review on mismatch negativity findings

    In specific language impairment (SLI), the child’s oral language skills are delayed relative to nonverbal cognitive abilities. The problems typically relate to phonological and morphological processing and word learning. This article reviews studies that have used the mismatch negativity (MMN) to investigate low-level neural auditory dysfunctions in this disorder. With the MMN, it is possible to probe the accuracy of neural sound discrimination and sensory memory functions. These studies have found smaller response amplitudes and longer latencies for speech and non-speech sound changes in children with SLI than in typically developing children, suggesting impaired and slow auditory discrimination in SLI. They further suggest shortened sensory memory duration and vulnerability of the sensory memory to masking effects. Importantly, some studies reported associations between MMN parameters and language test measures. In addition, language intervention was found to influence the abnormal MMN in children with SLI, enhancing its amplitude. These results suggest that the MMN can shed light on the neural basis of various auditory and memory impairments in SLI, which are likely to influence speech perception.

    Auditory evoked potentials to speech and nonspeech stimuli are associated with verbal skills in preschoolers

    Children's obligatory auditory event-related potentials (ERPs) to speech and nonspeech sounds have been shown to be associated with reading performance in children with or at risk of dyslexia and in their controls. However, very little is known about the cognitive processes these responses reflect. To investigate this question, we recorded ERPs to semisynthetic syllables and their acoustically matched nonspeech counterparts in 63 typically developed preschoolers and assessed their verbal skills with an extensive set of neurocognitive tests. P1 and N2 amplitudes were larger for nonspeech than speech stimuli, whereas the opposite was true for N4. Furthermore, left-lateralized P1s were associated with better phonological and prereading skills, and larger P1s to nonspeech than speech stimuli with poorer verbal reasoning performance. Moreover, left-lateralized N2s and equal-sized N4s to speech and nonspeech stimuli were associated with slower naming. In contrast, children with equal-sized N2 amplitudes at left and right scalp locations, and larger N4s for speech than nonspeech stimuli, performed fastest. We discuss the possibility that children’s ERPs reflect not only neural encoding of sounds but also sound quality processing, memory-trace construction, and lexical access. The results also corroborate previous findings that speech and nonspeech sounds are processed by at least partially distinct neural substrates.

    N1 and mismatch-negativity are two spatiotemporally distinct components: Further evidence for the N1 hypothesis of auditory distraction.

    Event-related potentials (ERPs) were recorded for ignored tones presented during the retention interval of a delayed serial recall task. The mismatch negativity (MMN) and N1 ERP components were measured to discern spatiotemporal and functional properties of their generation. A nine-token sequence with nine different tone pitches was more disruptive than an oddball (two-token) sequence, yet this oddball sequence was no more disruptive than a single repeating tone (one-token). Tones of the nine-token sequence elicited augmented N1 amplitudes compared to identical tones delivered in the one-token sequence, yet deviants elicited an additional component (MMN) with distinct temporal properties and topography. These results suggested that MMN and N1 are separate, functionally distinct components. Implications are discussed for the N1 hypothesis and the changing-state hypothesis of the disruption of serial recall performance by auditory distraction.

    Neural discrimination of speech sound changes in a variable context occurs irrespective of attention and explicit awareness

    To process complex stimuli like language, our auditory system must tolerate large acoustic variance, such as speaker variability, while remaining sensitive enough to discriminate between phonemes and to detect complex sound relationships in, e.g., prosodic cues. Our study examined discrimination of speech sounds in input mimicking natural speech variability, and detection of deviations in regular pitch relationships (rule violations) between speech sounds. We investigated the automaticity of these processes and the influence of attention and explicit awareness on them by recording the neurophysiological mismatch negativity (MMN) and P3a as well as task performance from 21 adults. The results showed neural discrimination of phonemes and rule violations, as indicated by the MMN and P3a, regardless of whether the sounds were attended, and even when participants could not explicitly describe the rule. While the small sample size precluded statistical analysis of some outcomes, we still found preliminary associations between MMN amplitudes, task performance, and emerging explicit awareness of the rule. Our results highlight the automaticity of processing complex aspects of speech as a basis for the emerging conscious perception and explicit awareness of speech properties. While the MMN operates at the implicit processing level, the P3a appears to work at the borderline of implicit and explicit processing.

    More efficient formation of longer-term representations for word forms at birth can be linked to better language skills at 2 years

    Infants are able to extract words from speech early in life. Here we show that the quality of forming longer-term representations for word forms at birth predicts expressive language ability at the age of two years. Seventy-five neonates were familiarized with two spoken disyllabic pseudowords. We then tested whether the neonate brain predicts the second syllable from the first one by presenting a familiarized pseudoword frequently and occasionally violating the learned syllable combination with different, rare pseudowords. Distinct brain responses were elicited by predicted and unpredicted word endings, suggesting that the neonates had learned the familiarized pseudowords. The difference between responses to predicted and unpredicted pseudowords, indexing the quality of word-form learning during familiarization, significantly correlated with expressive language scores (the mean length of utterance) at 24 months in the same infants. These findings suggest that 1) neonates can memorize disyllabic words so that a learned first syllable generates predictions for the word ending, and 2) early individual differences in the quality of word-form learning correlate with later language skills. This relationship may aid early identification of infants at risk for language impairment.

    Restricted consonant inventories of 2-year-old Finnish children with a history of recurrent acute otitis media

    Many children experience recurrent acute otitis media (RAOM) in early childhood. In a previous study, 2-year-old children with RAOM were shown to have immature neural patterns for speech sound discrimination. The present study further investigated the consonant inventories of these same children using natural speech samples. The results showed that 2-year-old children with RAOM (N = 19) produced fewer words and had smaller consonant inventories than healthy controls (N = 21). In particular, the number of consonants produced in medial positions of words was restricted. For places and manners of articulation, the most notable difference between the groups was observed for fricatives, which were produced less often by children with RAOM than by the controls. These results further support the assumption that early and recurrent middle ear infections should be considered a risk factor for language development.

    Phonetic training and non-native speech perception - New memory traces evolve in just three days as indexed by the mismatch negativity (MMN) and behavioural measures

    Language-specific, automatically activated memory traces form the basis for speech sound perception, and new neural representations can also evolve for non-native speech categories. The aim of this study was to find out how three days of phonetic listen-and-repeat training affects speech perception, and whether it generates new memory traces. We used behavioural identification, goodness rating, discrimination, and reaction time tasks together with mismatch negativity (MMN) brain response recordings to determine the training effects on native Finnish speakers. We trained the subjects on the voicing contrast in fricative sounds. Fricatives are not differentiated by voicing in Finnish, i.e., voiced fricatives do not belong to the Finnish phonological system, and they are therefore extremely hard for Finns to learn. However, after only three days of training, the native Finnish subjects had learned to perceive the distinction. The results show striking changes in the MMN response: it was significantly larger on the second day, after two training sessions. The majority of the behavioural indicators also showed improvement during training: identification changed after four training sessions, and discrimination and reaction times improved throughout training. These results suggest remarkable language-learning effects at both the perceptual and the pre-attentive neural level as a result of brief listen-and-repeat training in adult participants.